2025-05-21 12:36




Artificial Intelligence And Health & Safety Is One Of The Themes For International Workers Memorial Day 2025

Given the above headline, it is pertinent to draw attention to this paper, produced by the GMB Trade Union in 2024 for its special report marking 50 Years Of The Health and Safety At Work Act, analysing whether or not the Act is sufficient to deal with the risks to workers' health and safety at work from Artificial Intelligence (AI):

The world of work has transformed dramatically since 1974. New technologies such as the Internet and Artificial Intelligence (AI) have emerged, with little regulation to date. Our Special Report to Congress 2022 on The Future of Work identified a number of concerns, although it did not focus specifically on health and safety risks.

Widespread automation is now a reality, as anyone who has been forced to use a self-checkout machine will recognize. Automation brings opportunities but also profound risks, and the challenge lies in creating a legal framework for workplace health and safety that is fit for the next 50 years.

The Health and Safety at Work Act was designed to be, to an extent, ‘future-proof.’ Its principles apply to all work activities, regardless of technological advancement. As the Robens Report put it, “The safety system must look to future possibilities as well as to past experience.”

While work equipment remains regulated, primarily through the Provision and Use of Work Equipment Regulations 1998 (PUWER), these regulations now face new tests. PUWER requires that all work equipment must be:

  • Suitable for its intended use;
  • Safe for use, maintained in a safe condition, and inspected to prevent deterioration;
  • Used only by individuals who have received adequate information, instruction, and training;
  • Equipped with suitable health and safety measures, including guarding, emergency stops, isolation devices, visible markings, and warning systems.

These requirements apply equally to robots as to hand tools. So why, if the law covers such equipment, is there still growing concern over automation and AI?

There are two key risks:

  • Automation may eliminate traditional hazards, such as manual handling, but introduce new ones, such as a relentless acceleration of work pace.
  • Overreliance on automation and AI may foster complacency, leading to catastrophic failures when automated systems malfunction or unforeseen risks arise.

Several examples illustrate these risks.

In the retail and logistics sectors, pick rates have been dramatically increased as Just-in-Time efficiencies improve through automation. The Manual Handling Operations Regulations 1992 mention a “rate of work imposed by a process” but set no upper limits. This has allowed employers to impose increasingly punishing productivity targets with little legal recourse for workers. Here, the issue lies not with automation itself, but with its human consequences.

Image: GMB Report - click to download from the E-Library

Similarly, the use of mobile apps to direct work activities raises serious concerns. Although recent court decisions have clarified aspects of employment status, the sector remains a grey area in health and safety law. Pace of work, cumulative working hours, and the provision of protective equipment are often overlooked, dismissed by classifying workers as self-employed. Because these workers have no fixed workplace, incidents are individualized, limiting the opportunity to learn from them. A 2023 US Gig Workers Rising report revealed that 31 app workers were murdered while working in 2022. It is critical that the UK heed these warnings before similar patterns emerge.

The adoption of new technologies without full understanding of their health and safety implications is another urgent concern. The recent series of fires on electric buses in South London, which led to the recall of more than 1,750 vehicles, exemplifies the risks. Although no injuries occurred, the potential for fatal incidents was significant, particularly had the fires broken out during peak service times.

Self-driving vehicles pose further challenges. Since 2018, at least 29 fatalities have occurred in the USA involving autonomous vehicles. Nevertheless, the UK Government is moving forward with the Automated Vehicles Bill, which may have received Royal Assent by the time of Congress. Although the Bill establishes provisions for an Inspectorate, it remains transport legislation and does not fall under the scope of the Health and Safety at Work Act. Consequently, the precautionary approach that underpins health and safety law may not be applied. Particularly concerning is the potential deployment of self-driving trucks before the full extent of associated risks is understood.

Artificial Intelligence poses even greater systemic risks. AI is already being used to draft policies, procedures, and risk assessments, and predictive technologies can now anticipate potential incidents before they occur. However, these systems are neither fully tested nor infallible. Complacency may develop if hazards are assumed to be controlled by AI, increasing the risk of disaster should a system fail. Unlike human supervisors, AI lacks conscience—a critical factor in maintaining vigilance. It is telling that a 2024 Wales TUC report highlighted the replacement of human judgment as a major concern among workers experiencing AI integration in their workplaces.

Under current law, the wholesale automation of health and safety management is not permitted. Regulation 7 of The Management of Health and Safety at Work Regulations 1999 requires employers to appoint a ‘competent’ person—someone with the necessary skills, knowledge, and experience to manage health and safety effectively. The HSE has clarified that the use of AI requires risk assessment; it has not, however, insisted that human intelligence must remain at the core of safety systems.

At present, the UK lacks a single regulatory body or comprehensive legal framework governing the creation, application, or use of AI. The Government's White Paper, A pro-innovation approach to AI regulation, proposes five principles:

  • Safety, security, and robustness;
  • Appropriate transparency and explainability;
  • Fairness;
  • Accountability and governance;
  • Contestability and redress.

However, 'safety' in this context focuses on personal, online, and medical safety—not workplace health and safety. Workers are not mentioned at all. Furthermore, the White Paper is explicit:

“We will not put these principles on a statutory footing initially. New rigid and onerous legislative requirements on businesses could hold back AI innovation and reduce our ability to respond quickly and in a proportionate way to future technological advances. Instead, the principles will be issued on a non-statutory basis and implemented by existing regulators.”

This leaves worker protections dependent on the adaptability and resources of existing regulatory bodies.


GMB believes this approach falls short of the precautionary principles fundamental to health and safety legislation. We need a regulatory framework that places appropriate checks and balances on both technology and employers—one that safeguards workers while encouraging innovation. Worker health and safety must be at the heart of any strategy to regulate emerging technologies.

Accordingly, GMB calls on the future Government to establish a tripartite commission—comprising Government, Employers, and Trades Unions—to examine the implications of AI and automation on worker health and safety, and to implement any necessary regulations arising from its recommendations.

Source: GMB



Designed, Hosted and Maintained by Union Safety Services